Speaker Detail

Arun Prakash Asokan

Associate Director Data Science


Arun Prakash Asokan is an AI thought leader and intrapreneur with over 15 years of experience driving comprehensive AI programs across diverse domains. Recognized as a Scholar of Excellence by the Indian School of Business, he blends academic rigor with practical expertise, holding a Master's in Computer Science Engineering from BITS Pilani and having completed an Advanced Management Program at ISB Hyderabad. His passion for building AI products shows in his leadership of transformative initiatives across industries such as banking, marketing, healthcare, and pharma. He spearheads end-to-end AI programs, excels at translating raw business problems into AI solutions aligned with business goals, and has a proven track record of delivering solutions that leverage state-of-the-art techniques. Arun has built several impactful GenAI-powered copilots and products in sensitive enterprise settings, helping numerous businesses succeed. A Grand Winner of the Tableau International Contest, he champions Generative AI technologies through numerous tech talks, webinars, and workshops, and serves as an AI visiting faculty member and guest lecturer, reflecting his commitment to education and innovation in AI.

Overview 

In today's rapidly evolving digital landscape, Retrieval-Augmented Generation (RAG) has emerged as a transformative technology, permeating various industries and shaping the future of artificial intelligence. RAG's ability to seamlessly integrate retrieval and generation capabilities has unlocked unprecedented possibilities, revolutionizing how we approach problem-solving, decision-making, and knowledge creation.

RAG is an AI framework that enhances the accuracy and reliability of large language models (LLMs) by allowing them to retrieve relevant information from external knowledge sources before generating a response. This helps ground the LLM's outputs in factual data, reducing the risk of hallucinating incorrect or misleading information.
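The retrieve-then-generate pattern described above can be sketched in a few lines of dependency-free Python. This is a conceptual stand-in, not a production pipeline: the `DOCS` list, the lexical-overlap `score`, and the prompt-building `answer` function are all hypothetical; a real system would use embeddings, a vector store, and an actual LLM call.

```python
# Minimal sketch of the RAG pattern: retrieve relevant text from a small
# knowledge base, then ground the generation step in that retrieved context.
import math
import re
from collections import Counter

DOCS = [
    "RAG retrieves relevant documents before the model generates an answer.",
    "Large language models can hallucinate facts when answering from memory alone.",
    "Vector databases store embeddings for fast similarity search.",
]

def _tokens(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def score(query: str, doc: str) -> float:
    """Crude lexical-overlap score standing in for embedding similarity."""
    q, d = Counter(_tokens(query)), Counter(_tokens(doc))
    return sum((q & d).values()) / math.sqrt(len(_tokens(doc)) or 1)

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In a real RAG pipeline this grounded prompt would be sent to an LLM.
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = answer("Why do language models hallucinate?")
```

The key idea survives even in this toy: the model's input is augmented with retrieved facts, so its output can be checked against (and grounded in) an external source.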

Big Business Benefits of RAG

  • Access to Reliable Information: RAG ensures LLMs access current, credible facts, allowing users to verify sources.
  • Dynamic Data Integration: RAG integrates new data seamlessly, reducing the need for frequent retraining.
  • Cost Efficiency: RAG lowers computational and financial burdens compared to constant retraining.
  • Improved Response Accuracy: RAG helps LLMs recognize when they lack sufficient information and probe for more detail, producing more accurate responses.

Welcome to the "Building GenAI Applications Using RAG" workshop, a comprehensive journey from absolute beginner to advanced RAG application developer. Throughout this immersive experience, participants will progress from foundational concepts to building sophisticated RAG systems, all driven by hands-on learning. 

The workshop begins by demystifying the evolution of Generative AI and clarifying key terms, guiding even novices through the complex landscape. Participants will explore various GenAI approaches, equipping them with the knowledge to make informed decisions for their specific business applications. From there, attendees dive into the heart of RAG, starting with tokenization and advancing to building practical RAG applications using Langchain. Each step is hands-on, ensuring tangible outcomes and solid takeaways. As the day unfolds, participants master advanced retrieval strategies, query expansion, and evaluation techniques, honing their skills for real-world application. Bonus topics on super-advanced concepts await, time permitting. 
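Since the hands-on track starts at tokenization, a toy illustration may help set expectations: text is split into units and mapped to integer ids before it ever reaches a model. This whitespace-and-punctuation splitter is only a conceptual stand-in; real LLM tokenizers use learned subword vocabularies (e.g. BPE), and the function names here are illustrative.

```python
# Toy tokenizer: split text into word/punctuation tokens, then map each
# distinct token to an integer id, mimicking the first stage of an LLM pipeline.
import re

def tokenize(text: str) -> list[str]:
    # \w+ captures word tokens; [^\w\s] keeps punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]", text)

def build_vocab(tokens: list[str]) -> dict[str, int]:
    # dict.fromkeys preserves first-seen order while deduplicating.
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

toks = tokenize("RAG grounds LLMs in facts.")
ids = [build_vocab(toks)[t] for t in toks]
```

Real tokenizers differ mainly in how they split (learned subwords rather than whole words), but the text-to-ids contract is the same.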

Through this workshop, participants emerge with the confidence and capability to navigate the complexities of Generative AI, from theory to application, and get started on a transformative journey toward becoming confident RAG developers.

Prerequisites: Basic knowledge of Python; familiarity with the Transformer architecture (nice to have)


Autonomous AI agents are reshaping the AI landscape, revolutionizing how we interact with technology and expanding the capabilities of LLM-powered AI systems. These agents, which can independently perceive their environment, make decisions, and take actions without human intervention, are becoming increasingly prevalent across industries and applications.

In this talk, "Agentic AI: The Rise of Autonomous AI Agents and LangGraph," we will delve into the emerging theme of agentic AI and explore the progressive levels of maturity in constructing generative AI applications using large language models (LLMs).

Attendees will be introduced to the world of autonomous AI agents, understanding what an agent is and how it differs from other maturity levels of GenAI applications built with LLMs, such as Retrieval-Augmented Generation (RAG) systems. The session will cover the key components and operational principles of agentic AI, highlighting popular frameworks like Chain of Thought (CoT) and ReACT that guide the cognitive processes of autonomous agents. Additionally, we will examine how to enhance RAG applications with agentic RAG, pushing the boundaries of what these systems can achieve.
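The ReACT pattern mentioned above interleaves reasoning with tool use: the agent proposes an action, observes the result, and repeats until it can answer. A bare-bones sketch of that loop, with a scripted `fake_llm` standing in for a real model and a made-up `calculator` tool, might look like this (all names are illustrative, not from any real framework):

```python
# Sketch of a ReACT-style agent loop: thought -> action -> observation,
# repeated until the (stubbed) model decides it can finish.

def calculator(expr: str) -> str:
    # Toy tool; eval is used on trusted, scripted input only.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

# Scripted model outputs: first request a tool call, then give a final answer.
SCRIPT = iter([
    ("action", "calculator", "6 * 7"),
    ("finish", "The answer is 42."),
])

def fake_llm(history: list[str]):
    # A real agent would send `history` to an LLM; here we replay a script.
    return next(SCRIPT)

def react_agent(question: str) -> str:
    history = [f"Question: {question}"]
    while True:
        step = fake_llm(history)
        if step[0] == "finish":
            return step[1]
        _, tool, arg = step
        observation = TOOLS[tool](arg)          # act, then observe
        history.append(f"Observation: {observation}")

result = react_agent("What is 6 * 7?")
```

Swapping the scripted stub for a real LLM call (and real tools) is essentially what agent frameworks automate.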

LangGraph is closely related to agentic AI, providing a framework for defining agentic AI workflows as graphs. Through demonstrations of LangGraph, we will witness the powerful capabilities of these autonomous agents. We will explore LangGraph's functionalities and various architectures that exemplify agentic AI systems, such as Supervisor, Self Reflection, and Human Reflection.


Managing and scaling ML workloads has never been more challenging. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments; ML engineers, on the other hand, need distributed training, artifact management, and automated deployment for high performance.

